In this paper, we propose neural networks that address the stability and field-of-view problems of convolutional neural networks (CNNs). As an alternative to increasing network depth or width to improve performance, we propose integral-based, spatially nonlocal operators related to global weighted Laplacian, fractional Laplacian, and inverse fractional Laplacian operators, which arise in several problems in the physical sciences. The forward propagation of such networks is inspired by partial integro-differential equations (PIDEs). We test the effectiveness of the proposed neural architectures on benchmark image classification datasets and a semantic segmentation task in autonomous driving. Furthermore, we investigate the additional computational cost of these dense operators and the stability of forward propagation in the proposed networks.
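As a rough illustration of what an integral-based nonlocal operator looks like, the sketch below applies a dense, fractional-Laplacian-like kernel to a 1D signal. The kernel form, grid, and regularization are hypothetical stand-ins, not the paper's exact construction:

```python
import numpy as np

def nonlocal_layer(u, x, s=0.5, eps=1e-6):
    """Apply a dense, integral-based nonlocal operator to a signal u
    sampled at 1D grid points x.  The kernel |x_i - x_j|^{-(1+2s)} mimics
    the singular kernel of the fractional Laplacian (-Delta)^s, and the
    difference form sum_j K_ij (u_j - u_i) puts constants in the null
    space, as for any Laplacian-type operator.  Illustrative only."""
    dx = x[1] - x[0]                        # uniform grid spacing (quadrature weight)
    diff = np.abs(x[:, None] - x[None, :])  # pairwise distances: a dense n x n coupling
    kernel = 1.0 / (diff + eps) ** (1.0 + 2.0 * s)
    np.fill_diagonal(kernel, 0.0)           # exclude the singular diagonal
    return dx * (kernel @ u - kernel.sum(axis=1) * u)

x = np.linspace(0.0, 1.0, 64)
u = np.sin(2.0 * np.pi * x)
v = nonlocal_layer(u, x)                    # every output sees the whole domain at once
```

Unlike a small convolution stencil, a single application of this operator couples every grid point to every other, which is the "global field of view" traded against the cost of a dense matrix-vector product.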
The release of ChatGPT, a language model capable of generating text that appears human-like and authentic, has gained significant attention beyond the research community. We expect that the convincing performance of ChatGPT incentivizes users to apply it to a variety of downstream tasks, including prompting the model to simplify their own medical reports. To investigate this phenomenon, we conducted an exploratory case study. In a questionnaire, we asked 15 radiologists to assess the quality of radiology reports simplified by ChatGPT. Most radiologists agreed that the simplified reports were factually correct, complete, and not potentially harmful to the patient. Nevertheless, instances of incorrect statements, missed key medical findings, and potentially harmful passages were reported. While further studies are needed, the initial insights of this study indicate a great potential in using large language models like ChatGPT to improve patient-centered care in radiology and other medical domains.
Deep learning-based 3D human pose estimation performs best when trained on large amounts of labeled data, making combined learning from many datasets an important research direction. One obstacle to this endeavor is the different skeleton formats provided by different datasets, i.e., they do not label the same set of anatomical landmarks. There is little prior research on how to best supervise one model with such discrepant labels. We show that simply using separate output heads for different skeletons results in inconsistent depth estimates and insufficient information sharing across skeletons. As a remedy, we propose a novel affine-combining autoencoder (ACAE) method to perform dimensionality reduction on the number of landmarks. The discovered latent 3D points capture the redundancy among skeletons, enabling enhanced information sharing when used for consistency regularization. Our approach scales to an extreme multi-dataset regime, where we use 28 3D human pose datasets to supervise one model, which outperforms prior work on a range of benchmarks, including the challenging 3D Poses in the Wild (3DPW) dataset. Our code and models are available for research purposes.
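The core idea of latent points as affine combinations of landmarks can be sketched as follows. The weight matrices here are random stand-ins for the learned ACAE weights, and the joint/latent counts are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def affine_weights(rng, rows, cols):
    """Random weight matrix whose rows sum to 1, so each output point is
    an affine combination of the input points (hypothetical stand-in for
    learned autoencoder weights)."""
    w = rng.random((rows, cols))
    return w / w.sum(axis=1, keepdims=True)

n_joints, n_latent = 28, 8               # e.g. 28 skeleton joints -> 8 latent points
W_enc = affine_weights(rng, n_latent, n_joints)
W_dec = affine_weights(rng, n_joints, n_latent)

P = rng.standard_normal((n_joints, 3))   # one 3D pose: joints x xyz
latent = W_enc @ P                       # latent 3D keypoints
P_hat = W_dec @ latent                   # reconstructed joints

# Because every row sums to 1, the mapping commutes with translation:
# encoding a shifted pose shifts the latent points by the same offset.
t = np.array([0.5, -1.0, 2.0])
shifted_latent = W_enc @ (P + t)         # equals latent + t
```

The row-sum-to-one constraint is what makes the latent points geometrically meaningful 3D locations rather than arbitrary linear features, since affine combinations are preserved under translations of the pose.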
We propose an end-to-end inverse rendering pipeline called SupeRVol that allows us to recover 3D shape and material parameters from a set of color images in a super-resolution manner. To this end, we represent both the bidirectional reflectance distribution function (BRDF) and the signed distance function (SDF) by multi-layer perceptrons. In order to obtain both the surface shape and its reflectance properties, we revert to a differentiable volume renderer with a physically based illumination model that allows us to decouple reflectance and lighting. This physical model takes into account the effect of the camera's point spread function thereby enabling a reconstruction of shape and material in a super-resolution quality. Experimental validation confirms that SupeRVol achieves state of the art performance in terms of inverse rendering quality. It generates reconstructions that are sharper than the individual input images, making this method ideally suited for 3D modeling from low-resolution imagery.
Segmenting humans in 3D indoor scenes has become increasingly important with the rise of human-centered robotics and AR/VR applications. In this direction, we explore the tasks of 3D human semantic-, instance- and multi-human body-part segmentation. Few works have attempted to directly segment humans in point clouds (or depth maps), which is largely due to the lack of training data on humans interacting with 3D scenes. We address this challenge and propose a framework for synthesizing virtual humans in realistic 3D scenes. Synthetic point cloud data is attractive since the domain gap between real and synthetic depth is small compared to images. Our analysis of different training schemes using a combination of synthetic and realistic data shows that synthetic data for pre-training improves performance in a wide variety of segmentation tasks and models. We further propose the first end-to-end model for 3D multi-human body-part segmentation, called Human3D, that performs all the above segmentation tasks in a unified manner. Remarkably, Human3D even outperforms previous task-specific state-of-the-art methods. Finally, we manually annotate humans in test scenes from EgoBody to compare the proposed training schemes and segmentation models.
Traditionally, monocular 3D human pose estimation employs a machine learning model to predict the most likely 3D pose for a given input image. However, a single image can be highly ambiguous and induces multiple plausible solutions for the 2D-3D lifting step which results in overly confident 3D pose predictors. To this end, we propose \emph{DiffPose}, a conditional diffusion model, that predicts multiple hypotheses for a given input image. In comparison to similar approaches, our diffusion model is straightforward and avoids intensive hyperparameter tuning, complex network structures, mode collapse, and unstable training. Moreover, we tackle a problem of the common two-step approach that first estimates a distribution of 2D joint locations via joint-wise heatmaps and consecutively approximates them based on first- or second-moment statistics. Since such a simplification of the heatmaps removes valid information about possibly correct, though labeled unlikely, joint locations, we propose to represent the heatmaps as a set of 2D joint candidate samples. To extract information about the original distribution from these samples we introduce our \emph{embedding transformer} that conditions the diffusion model. Experimentally, we show that DiffPose slightly improves upon the state of the art for multi-hypothesis pose estimation for simple poses and outperforms it by a large margin for highly ambiguous poses.
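A minimal sketch of the heatmap-as-samples idea, using a toy bimodal heatmap (all values hypothetical): a moment-based fit lands between the two plausible locations, while sampling preserves both as joint candidates:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy bimodal heatmap for one joint: two plausible 2D locations
# (e.g. a left/right ambiguity).  Real heatmaps would come from a
# 2D pose network; this one is constructed for illustration.
H, W = 32, 32
yy, xx = np.mgrid[0:H, 0:W]
heatmap = (np.exp(-((yy - 8) ** 2 + (xx - 8) ** 2) / 8.0)
           + np.exp(-((yy - 24) ** 2 + (xx - 24) ** 2) / 8.0))
probs = heatmap.ravel() / heatmap.sum()

# A first-moment (mean) fit collapses the distribution to a single
# point between the modes -- at neither plausible joint location:
mean_y = (probs * yy.ravel()).sum()
mean_x = (probs * xx.ravel()).sum()

# Sampling instead keeps both modes as 2D joint candidates that can
# condition a diffusion model downstream:
idx = rng.choice(H * W, size=256, p=probs)
samples = np.stack([idx // W, idx % W], axis=1)      # (256, 2) candidates
near_a = (np.abs(samples - 8).max(axis=1) <= 4).mean()   # fraction near mode A
near_b = (np.abs(samples - 24).max(axis=1) <= 4).mean()  # fraction near mode B
```

Here `mean_y`/`mean_x` land near the midpoint (16, 16), while the sample set splits between the two modes, retaining exactly the multimodal information that moment statistics discard.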
Network intrusion detection systems (NIDS) for detecting malicious attacks continue to face challenges. NIDS are vulnerable to auto-generated port scan infiltration attempts, and they are often developed offline, resulting in a time lag that allows infiltration to spread to other parts of a network before it can be prevented. To address these challenges, we use hypergraphs to capture evolving patterns of port scan attacks via the set of internet protocol addresses and destination ports, thereby deriving a set of hypergraph-based metrics to train a robust and resilient ensemble machine learning (ML) NIDS that effectively monitors and detects port scanning activities and adversarial intrusions while evolving intelligently in real time. Through the combination of (1) intrusion examples, (2) NIDS update rules, (3) attack threshold choices to trigger NIDS retraining requests, and (4) a production environment with no prior knowledge of the nature of network traffic, 40 scenarios were auto-generated to evaluate the ML ensemble NIDS comprising three tree-based models. Results show that under the model setting of an Update-ALL-NIDS rule (namely, retraining and updating all three models upon the same NIDS retraining request), the proposed ML ensemble NIDS produced the best results, with nearly 100% detection performance throughout the simulation, exhibiting robustness in the complex dynamics of the simulated cyber-security scenario.
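To illustrate the flavor of hypergraph-based features, one can group the destination ports touched by each source IP into a hyperedge and derive per-source statistics. The feature names, threshold, and log below are hypothetical, not the paper's exact metric set:

```python
from collections import defaultdict

# Toy connection log: (source IP, destination port).  A hyperedge groups
# all destination ports touched by one source IP; hypergraph-derived
# statistics such as hyperedge size then become NIDS features.
events = [
    ("10.0.0.5", 22), ("10.0.0.5", 23), ("10.0.0.5", 80),
    ("10.0.0.5", 443), ("10.0.0.5", 8080),           # wide fan-out: scanner-like
    ("10.0.0.7", 443), ("10.0.0.7", 443),            # repeated normal traffic
]

hyperedges = defaultdict(set)
for src, port in events:
    hyperedges[src].add(port)

def features(ports, scan_threshold=4):
    """Per-source features: hyperedge size (distinct ports contacted)
    and a simple scan flag once fan-out crosses a chosen threshold."""
    return {"fanout": len(ports), "scan_suspect": len(ports) >= scan_threshold}

feats = {src: features(ports) for src, ports in hyperedges.items()}
# feats["10.0.0.5"] is flagged (5 distinct ports); feats["10.0.0.7"] is not.
```

In the paper's setting such features would be recomputed as traffic evolves and fed to the tree-based ensemble, so the flag above stands in for a whole family of time-varying hypergraph metrics.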
Owing to their high-quality object representations and efficient acquisition methods, 3D point clouds have attracted increasing attention in architecture, engineering, and construction. Consequently, many point cloud feature detection methods have been proposed in the literature to automate workflows such as classification or part segmentation. Nevertheless, the performance of point cloud automation systems lags significantly behind that of their image counterparts. While part of this failure stems from the irregularity, unstructuredness, and disorder of point clouds, which make point cloud feature detection more challenging than its image counterpart, we argue that a lack of inspiration from the image domain may be the primary cause of this gap. Indeed, given the overwhelming success of convolutional neural networks (CNNs) in image feature detection, it seems reasonable to design their point cloud counterparts, yet none of the proposed approaches closely resembles them. Specifically, even though many approaches generalize the convolution operation to point clouds, they fail to emulate CNNs' multiple feature detection and pooling operations. Therefore, we propose a graph-convolution-based unit, dubbed the shrinking unit, that can be stacked vertically and horizontally to design CNN-like 3D point cloud feature extractors. Given that the self, local, and global correlations between points in a point cloud convey crucial spatial geometric information, we also exploit them during feature extraction. We evaluate our proposal by designing a feature extractor model for the ModelNet-10 benchmark dataset, achieving 90.64% classification accuracy and showing that our innovative ideas are effective. Our code is available at github.com/albertotamajo/shrinking-unit.
Multiple existing benchmarks involve tracking and segmenting objects in video, e.g., Video Object Segmentation (VOS) and Multi-Object Tracking and Segmentation (MOTS), but there is little interaction between them due to the use of disparate benchmark datasets and metrics (e.g., J&F, mAP, sMOTSA). As a result, published works usually target a particular benchmark and are not easily comparable to one another. We believe that the development of generalized methods that can tackle multiple tasks requires greater cohesion among these research sub-communities. In this paper, we aim to facilitate this by proposing BURST, a dataset containing thousands of videos with high-quality object masks, and an associated benchmark comprising six tasks involving object tracking and segmentation in video. All tasks are evaluated using the same data and comparable metrics, which enables researchers to consider them in unison and hence more effectively pool knowledge from different methods across different tasks. Furthermore, we demonstrate several baselines for all tasks and show that approaches for one task can be applied to another with a quantifiable and explainable performance difference. Dataset annotations and evaluation code are available at: https://github.com/ali2500/burst-benchmark.
We present a new form of Fourier analysis, and associated signal processing concepts, for signals (or data) indexed by edge-weighted directed acyclic graphs (DAGs). This means that our Fourier basis yields an eigendecomposition of suitable notions of shift and convolution operators that we define. DAGs are a common model for capturing causal relationships among data, and our framework is causal in this sense: shift, convolution, and the Fourier transform are computed only from predecessors in the DAG. The Fourier transform requires the transitive closure of the DAG, for which several forms are possible depending on the interpretation of the edge weights. Examples include levels of influence, distances, or pollution distribution. Our framework differs from prior GSP: it is specific to DAGs and leverages, and extends, the classical theory of Moebius inversion. For a prototypical application, we consider DAGs modeling dynamic networks in which edges change over time. Specifically, we model the spread of an infection on such a DAG obtained from real-world contact data and learn the infection signal from samples, assuming sparsity in the Fourier domain.
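A minimal sketch of the causal accumulate/invert pair on a hypothetical 4-node unweighted DAG: summing a signal over each node's causal past corresponds to multiplying by the zeta (reflexive-transitive-closure) matrix of the DAG's partial order, and Moebius inversion undoes it. This illustrates only the unweighted case, not the paper's general edge-weighted construction:

```python
import numpy as np

# Toy 4-node DAG: 0 -> 1 -> 3 and 0 -> 2 -> 3 (edges from cause to effect).
n = 4
A = np.zeros((n, n))
for u, v in [(0, 1), (0, 2), (1, 3), (2, 3)]:
    A[v, u] = 1.0   # row v lists the direct predecessors of node v

# Reflexive-transitive closure W: W[v, u] = 1 iff u is a (possibly
# indirect) predecessor of v, or u == v.  Computed by boolean matrix
# powers; W is the zeta matrix of the DAG's partial order.
W = np.eye(n)
P = A.copy()
while P.any():
    W = np.minimum(W + P, 1.0)
    P = np.minimum(P @ A, 1.0)

s = np.array([1.0, 2.0, 0.0, 5.0])   # signal indexed by DAG nodes
accumulated = W @ s                   # each node sums the signal over its causal past

# Moebius inversion (applying the inverse of W) exactly recovers the
# per-node signal -- the operation underlying the Fourier transform here.
recovered = np.linalg.solve(W, accumulated)
```

Note that `accumulated` at each node depends only on that node's predecessors and itself, which is the causality property the abstract emphasizes.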